Many works on Click-Through Rate (CTR) prediction have focused on designing advanced architectures to model complex feature interactions but have neglected feature representation learning, e.g., by adopting a plain embedding layer for each feature, which results in sub-optimal feature representations and thus inferior CTR prediction performance. For instance, low-frequency features, which account for the majority of features in many CTR tasks, receive little attention in standard supervised learning settings, leading to sub-optimal representations. In this paper, we introduce self-supervised learning to produce high-quality feature representations directly and propose a model-agnostic Contrastive Learning for CTR (CL4CTR) framework consisting of three self-supervised learning signals that regularize feature representation learning: contrastive loss, feature alignment, and field uniformity. The contrastive module first constructs positive feature pairs by data augmentation and then minimizes the distance between the representations of each positive pair with the contrastive loss. The feature alignment constraint forces representations of features from the same field to be close, and the field uniformity constraint forces representations of features from different fields to be distant. Extensive experiments verify that CL4CTR achieves the best performance on four datasets and exhibits excellent effectiveness and compatibility with various representative baselines.
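The abstract names the three signals but not their formulations. Below is a minimal PyTorch-style sketch of how such losses could look, assuming Gaussian perturbation as the augmentation and mean-squared/cosine penalties; all names and loss forms are illustrative assumptions, not CL4CTR's actual implementation.

```python
import torch
import torch.nn.functional as F

def cl4ctr_style_losses(emb, field_ids, noise_std=0.1):
    """Illustrative sketch of three CL4CTR-style regularizers.

    emb:       (num_features, dim) feature embeddings
    field_ids: (num_features,) field index of each feature
    """
    # Contrastive loss: perturb each embedding twice (a stand-in for the
    # paper's data augmentation) and pull the two views together.
    view_a = emb + noise_std * torch.randn_like(emb)
    view_b = emb + noise_std * torch.randn_like(emb)
    l_contrastive = F.mse_loss(view_a, view_b)

    # Feature alignment: embeddings of features from the same field
    # should stay close to their field centroid.
    l_align = 0.0
    for f in field_ids.unique():
        members = emb[field_ids == f]
        l_align = l_align + (members - members.mean(0)).pow(2).sum(1).mean()

    # Field uniformity: centroids of different fields should be far apart,
    # penalized here via pairwise cosine similarity.
    centroids = torch.stack([emb[field_ids == f].mean(0) for f in field_ids.unique()])
    sim = F.cosine_similarity(centroids.unsqueeze(0), centroids.unsqueeze(1), dim=-1)
    l_uniform = (sim - torch.eye(len(centroids))).abs().mean()

    return l_contrastive, l_align, l_uniform

# Example: 6 features across 3 fields, 8-dim embeddings.
emb = torch.randn(6, 8, requires_grad=True)
fields = torch.tensor([0, 0, 1, 1, 2, 2])
l_c, l_a, l_u = cl4ctr_style_losses(emb, fields)
```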
Dynamic interaction graphs have been widely adopted to model the evolution of user-item interactions over time. Two factors are crucial when modelling user preferences for link prediction in dynamic interaction graphs: 1) collaborative relationships among users and 2) personalized interaction patterns of each user. Existing methods often consider these two factors together implicitly, which may lead to noisy user modelling when the two factors diverge. In addition, they usually require time-consuming parameter learning with back-propagation, which is prohibitive for real-time user preference modelling. To this end, this paper proposes FreeGEM, a parameter-free dynamic graph embedding method for link prediction. First, to take advantage of collaborative relationships, we propose an incremental graph embedding engine to obtain user/item embeddings. It follows an Online-Monitor-Offline architecture: an Online module approximately embeds users/items over time, a Monitor module estimates the approximation error in real time, and an Offline module calibrates the user/item embeddings whenever the online approximation error exceeds a threshold. Meanwhile, we integrate attribute information into the model, which enables FreeGEM to better model users belonging to under-represented groups. Second, we design a personalized dynamic interaction pattern modeller, which combines dynamic time decay with an attention mechanism to model users' short-term interests. Experimental results on two link prediction tasks show that FreeGEM outperforms state-of-the-art methods in accuracy while achieving over 36x improvement in efficiency. All code and datasets can be found at https://github.com/FudanCISL/FreeGEM.
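The abstract describes the short-term interest modeller only at a high level; the NumPy sketch below shows one plausible way exponential time decay could be combined with attention. Shapes, the decay form, and all names are assumptions rather than FreeGEM's actual code.

```python
import numpy as np

def time_decayed_attention(query, item_embs, timestamps, now, decay=0.1):
    """Sketch: model short-term interest as a decay-weighted attention
    readout over the user's most recently interacted items.

    query:      (d,)   current user embedding
    item_embs:  (n, d) embeddings of recent items
    timestamps: (n,)   interaction times, same unit as `now`
    """
    logits = item_embs @ query                   # similarity scores
    logits = logits - logits.max()               # numerical stability
    attn = np.exp(logits)
    attn *= np.exp(-decay * (now - timestamps))  # exponential time decay
    attn /= attn.sum()
    return attn @ item_embs                      # short-term interest vector

# Example: three recent items in a 4-d embedding space.
rng = np.random.default_rng(0)
interest = time_decayed_attention(
    query=rng.normal(size=4),
    item_embs=rng.normal(size=(3, 4)),
    timestamps=np.array([10.0, 55.0, 58.0]),
    now=60.0,
)
```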
$\mathbf{Purpose}$: To use artificial intelligence (AI) to: (1) exploit biomechanical knowledge of the optic nerve head (ONH) obtained from a relatively large population; (2) assess the robustness of an individual ONH from a single optical coherence tomography (OCT) scan; and (3) identify which key three-dimensional (3D) structural features make a given ONH robust. $\mathbf{Design}$: Retrospective cross-sectional study. $\mathbf{Methods}$: 316 subjects had their ONHs imaged with OCT before and during acute intraocular pressure (IOP) elevation through ophthalmo-dynamometry. IOP-induced lamina cribrosa (LC) deformations were then mapped in 3D and used to classify ONHs. Those with LC deformations above 4% were considered fragile, while those with deformations below 4% were considered robust. Learning from these data, we compared three AI algorithms for predicting robustness strictly from a baseline (undeformed) OCT volume: (1) a random forest classifier; (2) an autoencoder; and (3) a dynamic graph CNN (DGCNN). The latter algorithm also allowed us to identify which key 3D structural features make a given ONH robust. $\mathbf{Results}$: All three methods were able to predict ONH robustness from 3D structural information alone, without the need to perform biomechanical testing. The DGCNN (area under the receiver operating curve [AUC]: 0.76 $\pm$ 0.08) outperformed the autoencoder (AUC: 0.70 $\pm$ 0.07) and the random forest classifier (AUC: 0.69 $\pm$ 0.05). Interestingly, to assess robustness, the DGCNN mainly used information from the sclera and the LC insertion sites. $\mathbf{Conclusions}$: We propose an AI-driven approach that can assess the robustness of a given ONH solely from a single OCT scan, without the need for biomechanical testing. Longitudinal studies should establish whether ONH robustness could help identify fast visual-field-loss progressors.
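The study design reduces to labelling ONHs by their measured LC deformation and then predicting that label from baseline structure. A schematic scikit-learn sketch with synthetic data is below; the random forest stands in for the three compared models, and all feature names and values are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for the real data: per-ONH structural features extracted from
# baseline OCT, and the measured IOP-induced LC deformation (%).
baseline_features = rng.normal(size=(316, 32))
lc_deformation = rng.uniform(0.0, 8.0, size=316)

# ONHs deforming by more than 4% are labelled fragile (1), the rest robust (0).
fragile = (lc_deformation > 4.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    baseline_features, fragile, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"robustness-prediction AUC: {auc:.2f}")  # near 0.5 on synthetic data
```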
Purpose: (1) To develop a deep learning algorithm to identify the major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans; and (2) to exploit such information to robustly differentiate among healthy, optic disc drusen (ODD), and papilledema ONHs. This was a cross-sectional comparative study including eyes with confirmed ODD, eyes with papilledema due to high intracranial pressure (51 eyes), and healthy controls (100 eyes). 3D scans of the ONHs were acquired using OCT and then processed to improve deep-tissue visibility. First, a deep learning algorithm was developed using 984 B-scans (from 130 eyes) to identify the major neural/connective tissues and ODD regions. The performance of our algorithm was assessed using the Dice coefficient (DC). In a second step, a classification algorithm (random forest) was designed using 150 OCT volumes to perform 3-class classification (1: ODD, 2: papilledema, 3: healthy) strictly from the drusen and prelamina swelling scores derived from the segmentations. To assess performance, we reported the area under the receiver operating characteristic curve (AUC) for each class. Our segmentation algorithm was able to isolate neural and connective tissues as well as ODD regions whenever present, with an average DC of 0.93 $\pm$ 0.03 on the test set, corresponding to good performance. Classification was achieved with high AUCs: 0.99 $\pm$ 0.01 for the detection of ODD, 0.99 $\pm$ 0.01 for the detection of papilledema, and 0.98 $\pm$ 0.02 for the detection of healthy ONHs. Our AI approach was able to accurately discriminate ODD from papilledema using a single OCT scan. Our classification performance was excellent, with the caveat that it needs to be validated in a much larger population. Our approach may have the potential to establish OCT as the mainstay of diagnostic imaging in neuro-ophthalmology.
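The second-stage classifier operates on only two segmentation-derived scores per eye, which makes the setup easy to sketch. The scikit-learn example below uses synthetic scores (so the printed AUCs are meaningless); the two-feature, three-class design mirrors the abstract, and everything else is illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(1)

# Stand-in features: per-eye drusen score and prelamina swelling score
# (in the paper these come from the segmentation step).
X = rng.uniform(0, 1, size=(150, 2))
y = rng.integers(0, 3, size=150)     # 0: ODD, 1: papilledema, 2: healthy

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
proba = clf.predict_proba(X)

# One-vs-rest AUC per class, as reported in the abstract.
y_bin = label_binarize(y, classes=[0, 1, 2])
for c, name in enumerate(["ODD", "papilledema", "healthy"]):
    print(name, roc_auc_score(y_bin[:, c], proba[:, c]))
```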
Purpose: To assess whether the three-dimensional (3D) structural configuration of the central retinal vessel trunk and its branches (CRVT&B) could be used as a diagnostic marker for glaucoma. Methods: We trained a deep learning network to automatically segment the CRVT&B from the B-scans of optical coherence tomography (OCT) volumes of the optic nerve head (ONH). Subsequently, two different approaches were used for glaucoma diagnosis using the structural configuration of the CRVT&B as extracted from the OCT volumes. In the first approach, we aimed to provide a diagnosis using only the 3D structure of the CRVT&B with a 3D CNN. In the second approach, we projected the 3D structure of the CRVT&B onto three planes to obtain 2D images, and then used a 2D CNN for diagnosis. Segmentation accuracy was evaluated using the Dice coefficient, whereas diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The diagnostic performance of the CRVT&B was also compared with that of retinal nerve fiber layer (RNFL) thickness. Results: Our segmentation network was able to segment retinal blood vessels from OCT scans efficiently. On the test set, we achieved a Dice coefficient of 0.81 $\pm$ 0.07. The 3D and 2D diagnostic networks were able to differentiate glaucoma from non-glaucoma subjects with accuracies of 82.7% and 83.3%, respectively. The corresponding AUCs of the CRVT&B were 0.89 and 0.90, higher than those obtained with RNFL thickness alone. Conclusions: Our work demonstrated that the diagnostic power of the CRVT&B is superior to that of a gold-standard glaucoma parameter, i.e., RNFL thickness. Our work also suggested that the major retinal blood vessels form a skeleton whose configuration may be representative of major ONH structural changes, as typically observed with the development and progression of glaucoma.
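Projecting the segmented 3D vessel tree onto three planes is simple to express in code. The sketch below assumes a binary voxel volume and uses maximum-intensity projections; the projection type is an assumption, not necessarily the paper's exact choice.

```python
import numpy as np

def project_to_planes(vessel_volume):
    """Collapse a binary 3D vessel segmentation (z, y, x) into three 2D
    images by maximum-intensity projection along each axis."""
    return (
        vessel_volume.max(axis=0),  # axial (x-y) view
        vessel_volume.max(axis=1),  # coronal (x-z) view
        vessel_volume.max(axis=2),  # sagittal (y-z) view
    )

# Example with a toy 64^3 volume containing a single "vessel" line.
vol = np.zeros((64, 64, 64), dtype=np.uint8)
vol[np.arange(64), np.arange(64), 32] = 1
xy, xz, yz = project_to_planes(vol)
```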
Malaria is a life-threatening disease affecting millions. Microscopy-based assessment of thin blood films is the standard method for (i) determining malaria species and (ii) quantitating high-parasitemia infections. Full automation of malaria microscopy through machine learning (ML) is a challenging task because field-prepared slides vary widely in quality and presentation, and artifacts often heavily outnumber the relatively rare parasites. In this work, we describe a complete, fully automated framework for thin-film malaria analysis that applies ML methods, including convolutional neural nets (CNNs), trained on a large and diverse dataset of field-prepared thin blood films. Quantitation and species-identification results are close to sufficiently accurate for the concrete needs of drug-resistance monitoring and clinical use cases. We focus our methods and performance metrics on the requirements of the field use case, and we discuss key issues and important metrics for the application of ML methods to malaria microscopy.
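The quantitation step ultimately reduces to counting: parasitemia is conventionally reported as the percentage of red blood cells that are parasitized. A trivial sketch of that final computation, assuming per-cell CNN decisions are already available (names hypothetical):

```python
def parasitemia_percent(cell_predictions):
    """Given per-red-blood-cell decisions (True = parasitized, as produced
    by an upstream CNN), report parasitemia as the percentage of infected RBCs."""
    if not cell_predictions:
        return 0.0
    return 100.0 * sum(cell_predictions) / len(cell_predictions)

# Example: 3 parasitized cells among 500 examined RBCs -> 0.6%.
preds = [True] * 3 + [False] * 497
print(parasitemia_percent(preds))
```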
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do this work. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each dataset is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer-learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
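The core mechanism, addressing segmentation classes via text embeddings of their names rather than fixed output channels, can be sketched compactly. In the snippet below the CLIP text encoder is replaced by a deterministic stand-in, and the prompt template and all shapes are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

# Adding a new class is just adding another prompt.
class_prompts = [
    "a computerized tomography of a liver",
    "a computerized tomography of a pancreas",
    "a computerized tomography of a kidney tumor",
]

def text_encode(prompts, dim=64):
    # Placeholder: deterministic random vectors instead of real CLIP features.
    gen = torch.Generator().manual_seed(42)
    return F.normalize(torch.randn(len(prompts), dim, generator=gen), dim=-1)

def segment(voxel_features, class_embeddings):
    """voxel_features: (N, dim) per-voxel features from a vision backbone.
    Returns independent per-class foreground probabilities, shape (N, classes)."""
    return torch.sigmoid(voxel_features @ class_embeddings.T)

feats = F.normalize(torch.randn(1000, 64), dim=-1)  # dummy backbone output
masks = segment(feats, text_encode(class_prompts))  # (1000, 3)
```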
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative: their goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not carry enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate this locality problem of comparative SSL, we propose to incorporate the task of pixel restoration, explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool for image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose a non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
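The multi-task objective, pixel restoration plus siamese feature comparison at each pyramid scale, might look roughly like the following PyTorch sketch; the loss choices and weights are illustrative, not PCRLv2's exact formulation.

```python
import torch
import torch.nn.functional as F

def pyramid_ssl_loss(restored, targets, feats_a, feats_b, w_pix=1.0, w_cmp=1.0):
    """Sketch of a PCRLv2-style multi-task objective.

    restored, targets: lists of (B, C, H, W) tensors, one per pyramid scale,
                       for the pixel-restoration task.
    feats_a, feats_b:  lists of (B, D) projected features of two siamese
                       views, one pair per scale, for feature comparison.
    """
    loss = 0.0
    for r, t in zip(restored, targets):
        loss = loss + w_pix * F.mse_loss(r, t)                   # restore pixels
    for a, b in zip(feats_a, feats_b):
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        loss = loss + w_cmp * (2 - 2 * (a * b).sum(-1).mean())   # cosine loss
    return loss

# Example with two scales and a batch of 4.
rest = [torch.randn(4, 1, 32, 32), torch.randn(4, 1, 16, 16)]
targ = [torch.randn_like(r) for r in rest]
fa = [torch.randn(4, 128), torch.randn(4, 128)]
fb = [torch.randn(4, 128), torch.randn(4, 128)]
loss = pyramid_ssl_loss(rest, targ, fa, fb)
```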
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and the need for fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M-parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B-parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
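Parallel decoding fills in many masked tokens per step rather than one at a time. A toy NumPy sketch of a confidence-based refill loop is below; the predictor stand-in, the schedule, and the masking convention are all assumptions rather than Muse's actual procedure.

```python
import numpy as np

def parallel_decode(predict, seq_len, steps=8):
    """Toy sketch of confidence-based parallel decoding over masked tokens.

    predict(tokens) stands in for the text-conditioned transformer and must
    return a (seq_len, vocab) array of probabilities.
    """
    MASK = -1
    tokens = np.full(seq_len, MASK)
    for step in range(1, steps + 1):
        masked = tokens == MASK
        if not masked.any():
            break
        probs = predict(tokens)
        guess = probs.argmax(axis=1)
        conf = np.where(masked, probs.max(axis=1), -np.inf)  # refill masks only
        # Schedule: the number of decided tokens grows toward seq_len.
        target = int(np.ceil(seq_len * np.sin(np.pi / 2 * step / steps)))
        n_new = max(target - int((~masked).sum()), 1)
        for i in np.argsort(-conf)[:n_new]:
            tokens[i] = guess[i]
    return tokens

# Stand-in predictor: random probabilities over a 1024-entry codebook.
rng = np.random.default_rng(0)
image_tokens = parallel_decode(lambda t: rng.random((16, 1024)), seq_len=16)
```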